Comments about the article in Nature: What ChatGPT and generative AI mean for Science

The following is a discussion of this article in Nature Vol. 614, 9 February 2023, by Chris Stokel-Walker and Richard van Noorden.
To read the full text, select this link: https://www.nature.com/articles/d41586-023-00340-6 In the final sections I explain my own opinion.

Reflection


Introduction

In December, computational biologists Casey Greene and Milton Pividori embarked on an unusual experiment: they asked an assistant who was not a scientist to help them improve three of their research papers.
The emphasis is on 'improve'. What does that mean?
In one biology manuscript, their helper even spotted a mistake in a reference to an equation.
Okay.
This assistant, as Greene and Pividori reported in a preprint [1] on 23 January, is not a person but an artificial-intelligence (AI) algorithm called GPT-3, first released in 2020.
In reality this means it is a computer program, executed on a computer and designed by a team: a group of scientists.
The most famous of these tools, also known as large language models, or LLMs, is ChatGPT, etc
Okay
“I’m really impressed,” says Pividori, who works at the University of Pennsylvania in Philadelphia. “This will help us be more productive as researchers.”
What that means in reality is that he uses the ideas of others, and in general the most common or simple ideas.
If you are involved in a car accident, ChatGPT will tell you to drive slower or to take driving lessons.
Other scientists say they now regularly use LLMs not only to edit manuscripts, but also to help them write or check code and to brainstorm ideas.
To write computer code seems to me an almost impossible task, because you have to write down in great detail what the computer is supposed to do. If you want to write a program to calculate the square root of the number 3, you have to specify the algorithm used. The value 1 is too small. The value 2 is too high. The final value lies somewhere in between. This raises the question of who is intelligent: the person who writes this algorithm or the machine that calculates the answer?
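
To make this concrete, here is a minimal sketch in Python of the algorithm just described: the square root of 3 is found by repeatedly narrowing the interval between a value that is too small (1) and a value that is too high (2). The function name and the tolerance are my own choices for illustration.

  # Minimal sketch: bisection search for the square root of 3.
  def square_root_of_3(tolerance=1e-10):
      low, high = 1.0, 2.0          # 1*1 = 1 is too small, 2*2 = 4 is too high
      while high - low > tolerance:
          mid = (low + high) / 2.0
          if mid * mid < 3.0:
              low = mid             # mid is still too small
          else:
              high = mid            # mid is too high (or exact)
      return (low + high) / 2.0

  print(square_root_of_3())         # approximately 1.7320508...
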
The most practical way for ChatGPT to proceed is first to search the Internet for program libraries which contain programs in the language it needs. What it also needs are good descriptions of what each of these programs is supposed to do. Finally it has to compare these descriptions with what is requested, and bingo.
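
As an illustration of that matching step, here is a minimal sketch in Python with a small, hypothetical library of program descriptions invented for this example; it only shows the idea of comparing descriptions with a request, not how ChatGPT actually works.

  # Hypothetical library: program name -> short description of what it does.
  LIBRARY = {
      "sqrt": "calculate the square root of a number",
      "sort": "sort a list of numbers in ascending order",
      "mean": "calculate the average of a list of numbers",
  }

  def best_match(request):
      """Return the program whose description shares the most words with the request."""
      request_words = set(request.lower().split())
      return max(LIBRARY, key=lambda name: len(request_words & set(LIBRARY[name].split())))

  print(best_match("I need a program to calculate the square root of 3"))   # sqrt
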
Checking code involves almost the same problems.
ChatGPT’s creator, OpenAI in San Francisco, California, has announced a subscription service for $20 per month, promising faster response times and priority access to new features (although its trial version remains free).
Okay.
But LLMs have also triggered widespread concern — from their propensity to return falsehoods, to worries about people passing off AI-generated text as their own.
That is a general concern. If you want to learn anything, then learning, understanding, gaining practical experience and improving are tasks you have to do yourself.
When Nature asked researchers about the potential uses of chatbots such as ChatGPT, particularly in science, their excitement was tempered with apprehension.
These researchers should ask themselves the questions: How does ChatGPT work? What can it really do? What impression does ChatGPT give? Is ChatGPT intelligent?

"1. Fluent but not factual"

But researchers emphasize that LLMs are fundamentally unreliable at answering questions, sometimes generating false responses. “We need to be wary when we use these systems to produce knowledge,” says Osmanovic Thunström.
That is 100% correct.
The result is that LLMs easily produce errors and misleading information, particularly for technical topics that they might have had little data to train on.
The more data you give them, the more difficult this becomes, i.e. to discover the truth. This is a completely different problem from face recognition, where you know the answer.
LLMs also can’t show the origins of their information; if asked to write an academic paper, they make up fictitious citations.
All of that means that their final output cannot be classified as either right or wrong, and that ChatGPT does not take any responsibility.

"2. Can shortcomings be solved"

For now, ChatGPT is not trained on sufficiently specialized content to be helpful in technical topics, some scientists say.
That is the question. IMO the answer is no.
The problem, as far as I see it, is that ChatGPT can only reproduce what is available on the Internet. If you ask it for legal advice it can only literally copy that text, but of course that is not enough: it must also tell you what the source of this text is. It can also tell you what the evolution is, but every time it must tell you: when and where.
The demo was pulled from public access (although its code remains available) after users got it to produce inaccuracies and racism.
From an academic point of view this is misleading conduct. This same subject is also discussed in the next section.

"3. Safety and responsibility"

Besides directly producing toxic content, there are concerns that AI chatbots will embed historical biases or ideas about the world from their training data, such as the superiority of particular cultures, says Shobita Parthasarathy, director of a science, technology and public-policy programme at the University of Michigan in Ann Arbor.
The basic concept is that you can only learn something from events that happened in the past. That means that a certain event is true, that these are the facts and that they can be validated. If you study astronomy, the most interesting part is what people, through the ages, considered correct. That does not mean you should ban all books that advocated these wrong ideas. That would be 'stupid'.
Achieving that, however, required human moderators to label screeds of toxic text.
When ChatGPT also uses human moderators, the whole concept of "Generative AI" becomes misleading.
Last year, a group of academics released an alternative LLM, called BLOOM. The researchers tried to reduce harmful outputs by training it on a smaller selection of higher-quality, multilingual text sources.
More information is required about how the researchers tried to train BLOOM. It would be rather silly to make a list, as part of BLOOM, of words the LLM is not supposed to use.
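
To show what such a word list would look like, and why it would be rather silly, here is a minimal sketch of a naive word-list filter. The banned words are placeholders and this is my own illustration, not how BLOOM actually works; trivial rephrasings slip straight through it.

  BANNED_WORDS = {"badword1", "badword2"}        # hypothetical placeholder list

  def passes_filter(text):
      """Return True if the text contains none of the banned words."""
      words = {w.strip(".,!?").lower() for w in text.split()}
      return BANNED_WORDS.isdisjoint(words)

  print(passes_filter("this sentence contains badword1"))   # False: caught
  print(passes_filter("this sentence contains badw0rd1"))   # True: trivially evaded
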
Some researchers say that academics should refuse to support large commercial LLMs altogether.
And what about the silly use of WhatsApp on our iPhones?
Besides issues such as bias, safety concerns and exploited workers, these computationally intensive algorithms also require a huge amount of energy to train, raising concerns about their ecological footprint.
The same remark applies here.

"4. Enforcing honest use"

Setting boundaries for these tools, then, could be crucial, some researchers say. Edwards suggests that existing laws on discrimination and bias (as well as planned regulation of dangerous uses of AI) will help to keep the use of LLMs honest, transparent and fair.
Then these laws should be very clear about what honest, transparent and fair mean.
Anyway, it will be interesting to see how you train an LLM with only the text of these laws as input, without any clarification, to make the output of the LLMs honest, transparent and fair.
One key technical question is whether AI-generated content can be spotted easily.
My own guess is that this will become more and more difficult. The best strategy is to become a designer.
However, none of these tools claims to be infallible, particularly if AI-generated text is subsequently edited.
Who performs this editing and why?
Meanwhile, LLM creators are busy working on more sophisticated chatbots built on larger data sets (OpenAI is expected to release GPT-4 this year) — including tools aimed specifically at academic or medical work. In late December, Google and DeepMind published a preprint about a clinically-focused LLM it called Med-PaLM7. The tool could answer some open-ended medical queries almost as well as the average human physician could, although it still had shortcomings and unreliabilities.
Eric Topol, director of the Scripps Research Translational Institute in San Diego, California, says he hopes that, in the future, AIs that include LLMs might even aid diagnoses of cancer, and the understanding of the disease, by cross-checking text from academic literature against images of body scans.
The easiest strategy to follow is to compare images of actual body scans with a library of collected body scans. But such a strategy does not improve our understanding of the disease.
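
Here is a minimal sketch of that comparison strategy, assuming the scans are available as equally sized grayscale arrays; the labels and the data are hypothetical, and this is not how Med-PaLM or any real diagnostic system works.

  import numpy as np

  # Hypothetical library: diagnosis label -> one reference scan (random data here).
  library = {
      "healthy": np.random.rand(64, 64),
      "disease": np.random.rand(64, 64),
  }

  def closest_label(new_scan):
      """Return the label of the library scan with the smallest mean squared pixel difference."""
      return min(library, key=lambda label: np.mean((library[label] - new_scan) ** 2))

  print(closest_label(np.random.rand(64, 64)))
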


Reflection 1 - How intelligent is ChatGPT and how intelligent are its designers?

The truth is that ChatGPT is a computer program, using the logic implied in the program. If you want to make the program 'more accurate' you must modify the program. It is extremely difficult to train a program the way you can train a human being; this only works in very simple cases, where the rules of how you train a human can be described very precisely. In these simple cases a computer program can do the same.

For example, in the game of chess you should write down the position of all the pieces and your last move after which your opponent wins. The next time the same situation arises you should make a different move. When you do that accurately (more or less), your game of chess will improve. A computer program which follows exactly this same strategy will also slowly improve its game of chess. Do you call such a program intelligent? When such a program remembers all the moves it has made, is that program then becoming more intelligent? What's in a name?
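
A minimal sketch in Python of that learning rule: remember the position and the last move after which the opponent won, and avoid repeating that move in the same position. The position encoding and the moves are hypothetical.

  from collections import defaultdict
  import random

  losing_moves = defaultdict(set)      # position (encoded as a string) -> moves to avoid

  def record_loss(position, last_move):
      """After a lost game, mark the last move played in that position as bad."""
      losing_moves[position].add(last_move)

  def choose_move(position, legal_moves):
      """Prefer a move not yet known to lose; fall back to any legal move."""
      untried = [m for m in legal_moves if m not in losing_moves[position]]
      return random.choice(untried or legal_moves)

  record_loss("start", "g2g4")                              # g2g4 led to a quick loss
  print(choose_move("start", ["e2e4", "d2d4", "g2g4"]))     # avoids g2g4
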

In actual fact the designers are more intelligent than the current version of ChatGPT. The designers can modify the program and instantly improve their chance of winning against the current version of ChatGPT. For example, the designers can implement all the opening sets as explained in https://en.wikipedia.org/wiki/List_of_chess_openings (see the sketch below). After trying all these openings, and assuming that the computer remembers all the games played, the advantage of the designers slowly disappears.
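
A minimal sketch of such an opening book: a table that maps the moves played so far to the book reply. The entries shown are only a tiny illustrative subset, not the complete Wikipedia list.

  # Opening book: sequence of moves played so far -> reply taken from the book.
  OPENING_BOOK = {
      (): "e2e4",                       # a common first move
      ("e2e4", "e7e5"): "g1f3",         # King's Knight Opening
      ("e2e4", "c7c5"): "g1f3",         # main line against the Sicilian Defence
  }

  def book_move(moves_so_far):
      """Return the book reply for the given move sequence, or None if outside the book."""
      return OPENING_BOOK.get(tuple(moves_so_far))

  print(book_move(("e2e4", "e7e5")))    # g1f3
  print(book_move(("d2d4",)))           # None: outside the book
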


Reflection 2




Created: 20 December 2022
